feat: Add Grok 4 Fast model to xAI provider #8476
Conversation
- Added `grok-4-fast` model configuration to `xaiModels` in `packages/types/src/providers/xai.ts`
- Added a test case to verify `grok-4-fast` model selection works correctly
- Model includes a 2M context window and 30K max output tokens

Fixes #8474
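For reference, the added entry looks roughly like the sketch below. The `ModelInfo` field names are assumptions based on typical Roo Code provider tables, not copied from the PR diff, and the values shown are the corrected ones discussed later in this thread (8192 max tokens, $0.20/$0.50 per million tokens):

```typescript
// Hypothetical sketch of the grok-4-fast entry; ModelInfo field names
// are assumed for illustration, not copied from the real xai.ts.
interface ModelInfo {
	maxTokens: number
	contextWindow: number
	supportsImages: boolean
	supportsPromptCache: boolean
	inputPrice: number // USD per million input tokens
	outputPrice: number // USD per million output tokens
	description: string
}

const xaiModels: Record<string, ModelInfo> = {
	"grok-4-fast": {
		maxTokens: 8192, // matches grok-4; xAI publishes no limit for grok-4-fast
		contextWindow: 2_000_000, // 2M token context window
		supportsImages: false, // assumption for this sketch
		supportsPromptCache: false, // assumption for this sketch
		inputPrice: 0.2,
		outputPrice: 0.5,
		description: "xAI's Grok 4 Fast model with a 2M token context window",
	},
}

console.log(xaiModels["grok-4-fast"].contextWindow) // 2000000
```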
Beginning a full review now - I will add inline comments shortly.
…xAI specs

- Correct inputPrice to $0.20 per million tokens
- Correct outputPrice to $0.50 per million tokens
- Update maxTokens from 30,000 to 8,192 (matching grok-4, as xAI doesn't publish a max output limit for grok-4-fast)
- Remove subjective 'SOTA cost-efficiency' wording from the description
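As a sanity check on the corrected pricing, even a maximal request stays well under a dollar. A quick sketch, assuming the $0.20/$0.50 per-million-token prices above:

```typescript
// Corrected grok-4-fast pricing, in USD per million tokens.
const INPUT_PRICE_PER_M = 0.2
const OUTPUT_PRICE_PER_M = 0.5

// Estimate the USD cost of a single request.
function estimateCostUsd(inputTokens: number, outputTokens: number): number {
	return (inputTokens / 1_000_000) * INPUT_PRICE_PER_M + (outputTokens / 1_000_000) * OUTPUT_PRICE_PER_M
}

// A maximal request: full 2M-token context, 8,192 output tokens.
console.log(estimateCostUsd(2_000_000, 8192).toFixed(6)) // "0.404096"
```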
Review Summary

I found 1 critical issue that needs to be addressed:
```ts
expect(model.id).toBe(testModelId)
expect(model.info).toEqual(xaiModels[testModelId])
expect(model.info.contextWindow).toBe(2_000_000)
expect(model.info.maxTokens).toBe(30_000)
```
Test expects maxTokens to be 30_000 but the model configuration (line 32 in xai.ts) defines it as 8192. This mismatch was introduced when the second commit updated the pricing and maxTokens but didn't update the test. The test will fail when run.
Suggested change:

```diff
 expect(model.id).toBe(testModelId)
 expect(model.info).toEqual(xaiModels[testModelId])
 expect(model.info.contextWindow).toBe(2_000_000)
-expect(model.info.maxTokens).toBe(30_000)
+expect(model.info.maxTokens).toBe(8192)
```
The test was expecting 30,000 maxTokens but we corrected this to 8,192 to match grok-4, as xAI doesn't publish a maximum output token limit for grok-4-fast.
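The corrected expectation can be sketched as a self-contained check; the lookup helper and model table here are hypothetical stand-ins for the real code in `xai.ts` and `xai.spec.ts`:

```typescript
// Hypothetical stand-ins for the real xaiModels table and model lookup.
const xaiModels = {
	"grok-4-fast": { contextWindow: 2_000_000, maxTokens: 8192 },
} as const

type XaiModelId = keyof typeof xaiModels

function getModel(id: XaiModelId) {
	return { id, info: xaiModels[id] }
}

const model = getModel("grok-4-fast")

// Corrected expectations: maxTokens is 8192, not 30_000.
if (model.id !== "grok-4-fast") throw new Error("wrong id")
if (model.info.contextWindow !== 2_000_000) throw new Error("wrong contextWindow")
if (model.info.maxTokens !== 8192) throw new Error("wrong maxTokens")
console.log("grok-4-fast selection checks passed")
```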
No issues found.
Description
This PR attempts to address Issue #8474 by adding support for the Grok 4 Fast model in the xAI provider.
Changes
- Added `grok-4-fast` model configuration to `xaiModels` in `packages/types/src/providers/xai.ts`

Testing
Related Issue
Fixes #8474
Notes
The model parameters (context window, max tokens, pricing) are based on the configuration already present in the Roo provider models. This enables users with xAI accounts to directly select and use Grok 4 Fast.
Feedback and guidance are welcome!
Important
Add `grok-4-fast` model to xAI provider with specific parameters and test for correct model selection.

- Added `grok-4-fast` model to `xaiModels` in `xai.ts` with a 2,000,000 token context window, 8192 max tokens, and specific pricing.
- `xai.spec.ts` verifies `grok-4-fast` model selection.
- Test for `grok-4-fast` model selection added and passes.

This description was created by for 7bea3b1. You can customize this summary. It will automatically update as commits are pushed.